Coupled Vision and Inertial Navigation for Pin-Point Landing
Authors
Abstract
Our research task in NASA’s Mars Technology Program focuses on terrain-relative state estimation during planetary descent to enable pin-point landing. In this task we have developed estimation algorithms that fuse inertial and visual measurements, as well as computer vision algorithms that automatically extract landmark locations and terrain feature tracks through a sequence of images. These algorithms have been tested extensively with simulated data and with data collected during multiple field test campaigns. This paper describes our recent development and testing of an algorithm for navigation state estimation during planetary descent to enable precision landing. The algorithm automatically produces 2D-to-3D correspondences between descent images and a surface map, and 2D-to-2D correspondences through a sequence of descent images. These correspondences are combined with inertial measurements in an extended Kalman filter that estimates lander position, velocity, and attitude, as well as the time-varying biases of the inertial measurements. The filter tightly couples inertial and camera measurements in a resource-adaptive and hence real-time capable fashion. Results from a sounding rocket test, covering the dynamic profile of typical planetary landing scenarios, show estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results substantially improve on the current state of the art and meet the requirements of future planetary exploration missions.
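The tightly coupled update described in the abstract can be sketched as an EKF measurement update in which a pinhole-camera observation of a mapped landmark corrects the lander state. This is a minimal illustration under stated assumptions, not the authors' implementation: attitude is assumed known (identity rotation), the state is reduced to position and velocity, and the focal length `f`, landmark coordinates, and noise values are invented for the example.

```python
import numpy as np

def project(p, landmark, f=500.0):
    """Pinhole projection of a known 3D landmark seen from lander position p.
    Attitude is assumed to be identity to keep the sketch short."""
    c = landmark - p                   # landmark in the camera frame
    return f * c[:2] / c[2]

def jacobian(p, landmark, f=500.0):
    """2x6 measurement Jacobian w.r.t. the state [position, velocity]."""
    c = landmark - p
    J = np.array([[f / c[2], 0.0, -f * c[0] / c[2]**2],
                  [0.0, f / c[2], -f * c[1] / c[2]**2]])
    H = np.zeros((2, 6))
    H[:, :3] = -J                      # d(camera point)/d(position) = -I
    return H                           # velocity columns stay zero

def ekf_update(x, P, z, landmark, pix_var=1.0):
    """One tightly coupled EKF update with a single mapped-landmark pixel
    observation z; returns the corrected state and covariance."""
    H = jacobian(x[:3], landmark)
    R = pix_var * np.eye(2)            # pixel measurement noise
    innov = z - project(x[:3], landmark)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ innov
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

In the full filter of the paper, updates like this are interleaved with IMU-driven state propagation and with 2D-to-2D feature-track updates. Note that in this toy example the velocity states are corrected only through the position-velocity cross-covariance, which starts at zero here, so a single update moves only the position estimate.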
Similar Articles
Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks
In this paper we describe an Extended Kalman Filter (EKF) algorithm for estimating the pose and velocity of a spacecraft during Entry, Descent and Landing (EDL). The proposed estimator combines measurements of rotational velocity and acceleration from an Inertial Measurement Unit (IMU) with observations of a priori Mapped Landmarks (MLs), such as craters or other visual features, that exist on ...
Vision-Aided Inertial Navigation for Precise Planetary Landing: Analysis and Experiments
In this paper, we present the analysis and experimental validation of a vision-aided inertial navigation algorithm for planetary landing applications. The system employs tight integration of inertial and visual feature measurements to compute accurate estimates of the lander’s terrain-relative position, attitude, and velocity in real time. Two types of features are considered: mapped landmarks,...
Optical Navigation System for Pin-Point Lunar Landing
Major space agencies have an increasing interest in highly accurate (200 m) autonomous landing on the Moon. Inertial-only navigation is not compatible with this challenging requirement. The techniques currently investigated rely on vision-based navigation. A first approach consists in tracking features between sequences of images in order to measure the angular rate as well as the direction of ...
Improvement of Navigation Accuracy using Tightly Coupled Kalman Filter
In this paper, a mechanism is designed for integrating inertial navigation system (INS) and global positioning system (GPS) information. In this type of system, a series of mathematical and filtering algorithms with tightly coupled techniques serves several objectives, such as application of integrated navigation algorithms, precise calculation of flying-object position, speed and at...
Model Based Vision for Aircraft Position Determination
Passive vision techniques to estimate aircraft position during landing can be developed using a runway model, images acquired by an onboard imaging sensor, orientation information provided by the inertial navigation system, and the position estimate provided by devices such as the global positioning system. In this paper, expressions are derived for the sensitivity of point features to errors in the...